m2 <- brm(
  data = Kline, family = poisson,
  bf(total_tools ~ a + b * P,
     a + b ~ 0 + contact,
     nl = TRUE),
  prior = c(prior(normal(3, 0.5), nlpar = a),
            prior(normal(0, 0.2), nlpar = b)),
  iter = 2000, warmup = 1000, chains = 4, cores = 4, seed = 11,
  file = here("files/data/generated_data/m52.2"))
m1
 Family: poisson
  Links: mu = log
Formula: total_tools ~ 1
   Data: Kline (Number of observations: 10)
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     3.54      0.05     3.44     3.64 1.00     1716     1970

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
m2
 Family: poisson
  Links: mu = log
Formula: total_tools ~ a + b * P
         a ~ 0 + contact
         b ~ 0 + contact
   Data: Kline (Number of observations: 10)
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Regression Coefficients:
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
a_contacthigh     2.84      0.47     1.92     3.74 1.00     1378     1405
a_contactlow      1.69      0.29     1.13     2.25 1.00     1439     1323
b_contacthigh     0.09      0.05    -0.01     0.19 1.00     1399     1489
b_contactlow      0.19      0.03     0.14     0.25 1.00     1444     1295

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
What do these mean?
Once we’ve moved outside of the Gaussian distribution, your best bet is to push everything back through the posterior and look at predictions on the outcome scale. Do NOT try to interpret the raw coefficient estimates directly — they live on the log (link) scale.
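A minimal sketch of what "pushing through the posterior" looks like here, assuming `m2` has been fit as above (the range of `P` values is an illustrative choice):

```r
library(brms)

# Build a grid of predictor values covering both contact levels.
nd <- expand.grid(P       = seq(from = -1.4, to = 3, length.out = 100),
                  contact = c("low", "high"))

# fitted() averages over the posterior and returns the expected count
# (lambda) on the outcome scale, with 95% intervals -- this is what you
# should plot and interpret, not the log-scale coefficients.
f <- fitted(m2, newdata = nd)
head(cbind(nd, f))
```

Plotting `Estimate` (with `Q2.5`/`Q97.5` ribbons) against `P` by contact level recovers the tool-count curves directly.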
The data in data(Primates301) are 301 primate species and associated measures. Model the number of observations of social_learning for each species as a function of the log brain size. Use a Poisson distribution for the social_learning outcome variable. Interpret the resulting posterior.
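One way such a model might be sketched in brms (the data cleaning and the weakly informative priors are my own choices, not prescribed by the exercise):

```r
library(brms)
library(dplyr)
library(tidyr)

data(Primates301, package = "rethinking")

# Drop species with missing outcome or predictor, and log brain size.
d <- Primates301 %>%
  drop_na(social_learning, brain) %>%
  mutate(log_brain = log(brain))

# Poisson regression of social learning counts on log brain size.
m_primates <- brm(
  data = d, family = poisson,
  social_learning ~ 1 + log_brain,
  prior = c(prior(normal(0, 1), class = Intercept),
            prior(normal(0, 0.5), class = b)),
  iter = 2000, warmup = 1000, chains = 4, cores = 4)
```

Remember the caveat above: interpret this by pushing predictions through the posterior, not by reading the coefficient off the log scale.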
Returning to the tools example, McElreath points to a theoretical issue with the plots. He proposes to solve this with a scientifically motivated model of how the number of tools changes over time as a function of population.
\[
\Delta T = \alpha P^{\beta}-\gamma T
\]
\(\Delta T\) = change in the expected number of tools
\(\alpha\) = innovation rate
\(\beta\) = diminishing returns (elasticity)
\(\gamma\) = rate of loss
In this case, both \(\alpha\) and \(\beta\) are moderated by contact.
Note that in this case we’re not using the log link function. Wild! This is because all our parameters must be positive. There are two ways to enforce that: the first is to use priors that constrain the parameters to be positive. This is nice for transparency, but computationally more difficult. The other option is to wrap a parameter in the exponential function (here applied to a). This code does both.
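The code itself isn’t shown in this excerpt, but a sketch of what it might look like in brms follows. At equilibrium, \(\hat T = \alpha P^{\beta} / \gamma\), which is what the nonlinear formula encodes; the identity link replaces the log link, `exp(a)` keeps \(\alpha\) positive, and `lb = 0` on the priors constrains \(\beta\) and \(\gamma\). The specific priors here are illustrative assumptions:

```r
library(brms)

m_sci <- brm(
  data = Kline, family = poisson(link = "identity"),
  # Equilibrium of dT = alpha * P^beta - gamma * T, using raw population.
  bf(total_tools ~ exp(a) * population^b / g,
     a + b ~ 0 + contact,   # alpha and beta moderated by contact
     g ~ 1,                 # a single loss rate
     nl = TRUE),
  prior = c(prior(normal(1, 1), nlpar = a),
            prior(exponential(1), nlpar = b, lb = 0),  # positivity via bound
            prior(exponential(1), nlpar = g, lb = 0),
  iter = 2000, warmup = 1000, chains = 4, cores = 4, seed = 11))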
Returning to the primate example: Some species are studied much more than others. So the number of reported instances of social_learning could be a product of research effort. Use the research_effort variable, specifically its logarithm, as an additional predictor variable. Interpret the coefficient for log research_effort. How does this model differ from the previous one?
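A self-contained sketch of that extended model (data handling and priors are again my own illustrative choices):

```r
library(brms)
library(dplyr)
library(tidyr)

data(Primates301, package = "rethinking")

d <- Primates301 %>%
  drop_na(social_learning, brain, research_effort) %>%
  mutate(log_brain           = log(brain),
         log_research_effort = log(research_effort))

# Adding log research effort lets the model separate "how much social
# learning occurs" from "how hard we have looked for it".
m_effort <- brm(
  data = d, family = poisson,
  social_learning ~ 1 + log_brain + log_research_effort,
  prior = c(prior(normal(0, 1), class = Intercept),
            prior(normal(0, 0.5), class = b)),
  iter = 2000, warmup = 1000, chains = 4, cores = 4)
```

Comparing the `log_brain` coefficient here with the previous model shows how much of the apparent brain-size effect was really an artifact of research effort.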